256 research outputs found

    Capitalisation d'une ressource en or : le dictionnaire (Capitalizing on a golden resource: the dictionary)

    No full text
    The goal of this paper is to explore extensions to electronic dictionaries. Adding certain functions could considerably extend the range of tasks for which they provide support. Putting the needed information one mouse click away would allow for active reading; this requires tight coupling of the dictionary with a text editor, so that all the information in the dictionary is accessible via a mouse click. A dictionary combined with a flashcard system and an exercise generator could support the memorization and automation of words and syntactic structures. Finally, structuring the dictionary in a way akin to the human mind (an associative network) could help the writer to find new ideas and, if needed, the word he is looking for. In sum, rather than considering the dictionary as just another component of the language production or comprehension chain, we consider it the single most important resource, provided that one knows how to use it.
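    The associative-network idea can be sketched as a small word graph navigated by typed links. The class, relation names, and vocabulary below are invented for illustration; they are not from the paper:

```python
from collections import defaultdict

class LexicalNetwork:
    """A toy associative lexicon: words connected by typed associations."""

    def __init__(self):
        self.links = defaultdict(list)  # word -> [(relation, neighbor)]

    def add(self, word, relation, neighbor):
        # Store the link in both directions so navigation works either way.
        self.links[word].append((relation, neighbor))
        self.links[neighbor].append((relation, word))

    def neighbors(self, word, relation=None):
        return [n for r, n in self.links[word] if relation in (None, r)]

net = LexicalNetwork()
net.add("coffee", "drunk_with", "milk")
net.add("coffee", "kind_of", "beverage")
net.add("tea", "kind_of", "beverage")

# Starting from a shared category, the writer can discover related words.
print(net.neighbors("beverage", "kind_of"))  # → ['coffee', 'tea']
```

    Such a graph lets the writer move from a word he has to a word he lacks by following the same kinds of associations the paper attributes to the mental lexicon.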

    Let's get the student into the driver's seat

    Full text link
    Speaking a language and achieving proficiency in another one are highly complex processes which require the acquisition of various kinds of knowledge and skills: the learning of words, rules and patterns, and their connection to communicative goals (intentions), the usual starting point. To help the learner acquire these skills we propose an enhanced, electronic version of an age-old method: pattern drills (henceforth PDs). Though highly regarded in the fifties, PDs have since become unpopular, partly because of their lack of grounding (natural context) and their rigidity. Despite these shortcomings, we believe in the virtues of this approach, at least with regard to the acquisition of basic linguistic reflexes or skills (automatisms) necessary to survive in the new language. Of course, the method needs improvement, and we show here how this can be achieved. Unlike tapes or books, computers are open media, allowing for dynamic changes that take users' performances and preferences into account. Building an electronic version of PDs amounts to building an open resource, adaptable to the users' ever-changing needs.
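    A drill that adapts to the learner's performance can be sketched with Leitner-box scheduling, a standard spaced-repetition technique; this is one plausible realization, not the paper's own algorithm, and the French drill items are invented:

```python
class DrillScheduler:
    """Leitner-style scheduler: items the learner misses come back sooner."""

    def __init__(self, items, boxes=3):
        # All items start in box 0 (least mastered).
        self.boxes = [list(items)] + [[] for _ in range(boxes - 1)]

    def next_item(self):
        # Always drill from the lowest non-empty box first.
        for box in self.boxes:
            if box:
                return box[0]

    def record(self, item, correct):
        for i, box in enumerate(self.boxes):
            if item in box:
                box.remove(item)
                # Promote on success; demote to box 0 on failure.
                target = min(i + 1, len(self.boxes) - 1) if correct else 0
                self.boxes[target].append(item)
                return

drill = DrillScheduler(["je mange", "tu manges", "il mange"])
item = drill.next_item()          # 'je mange'
drill.record(item, correct=True)  # promoted to box 1
print(drill.next_item())          # prints: tu manges
```

    The open, dynamic character the abstract attributes to computers is exactly this: the item order is recomputed after every answer rather than fixed in advance.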

    Système d'aide à l'accès lexical : trouver le mot qu'on a sur le bout de la langue (A lexical access aid: finding the word on the tip of one's tongue)

    No full text
    The study of the tip-of-the-tongue phenomenon (TOT) provides valuable clues and insights concerning the organisation of the mental lexicon (meaning, number of syllables, relations with other words, etc.). This paper describes a tool based on psycholinguistic observations concerning the TOT phenomenon. We built it to enable a speaker/writer to find the word he is looking for: a word he may know, but which he is unable to access in time. We simulate the TOT phenomenon by creating a situation where the system knows the target word yet is unable to access it. To find the target word, we make use of the paradigmatic and syntagmatic associations stored in linguistic databases. Our experiment allows the following conclusion: a tool like SVETLAN, capable of structuring a dictionary (automatically) by domains, can successfully help the speaker/writer find the word he is looking for, if it is combined with a database rich in paradigmatic links, like EuroWordNet.
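    The lookup strategy can be illustrated by intersecting the associations the speaker can still recall with stored association sets and ranking the candidates. The association data below is invented for the example; it does not come from SVETLAN or EuroWordNet:

```python
# Toy TOT lookup: score each candidate word by how many of the user's
# recalled cues (paradigmatic or syntagmatic associates) it matches.
ASSOCIATIONS = {
    "harbour": {"ship", "sea", "dock", "port"},
    "lighthouse": {"sea", "light", "tower", "coast"},
    "ferry": {"ship", "sea", "passenger"},
}

def rank_candidates(cues):
    scores = {word: len(links & cues) for word, links in ASSOCIATIONS.items()}
    return sorted(scores, key=scores.get, reverse=True)

# The user remembers the elusive word has to do with sea, light and a tower.
print(rank_candidates({"sea", "light", "tower"}))  # 'lighthouse' ranks first
```

    This mirrors the abstract's setup: the system "knows" the target but reaches it only through the associations the user supplies.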

    Analyzing the Performance of GPT-3.5 and GPT-4 in Grammatical Error Correction

    Full text link
    GPT-3 and GPT-4 models are powerful, achieving high performance on a variety of Natural Language Processing tasks. However, there is a relative lack of detailed published analysis of their performance on the task of grammatical error correction (GEC). To address this, we perform experiments testing the capabilities of a GPT-3.5 model (text-davinci-003) and a GPT-4 model (gpt-4-0314) on major GEC benchmarks. We compare the performance of different prompts in both zero-shot and few-shot settings, analyzing intriguing or problematic outputs encountered with different prompt formats. We report the performance of our best prompt on the BEA-2019 and JFLEG datasets, finding that the GPT models can perform well in a sentence-level revision setting, with GPT-4 achieving a new high score on the JFLEG benchmark. Through human evaluation experiments, we compare the GPT models' corrections to source, human reference, and baseline GEC system sentences, and observe differences in editing strategies and in how they are scored by human raters.
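    The kind of few-shot prompt compared in such experiments can be sketched as plain string construction. The instruction wording and the example sentence pairs below are illustrative; they are not the paper's actual prompts:

```python
# Build a few-shot grammatical-error-correction prompt from example pairs.
FEW_SHOT = [
    ("She go to school every days.", "She goes to school every day."),
    ("I am agree with you.", "I agree with you."),
]

def build_prompt(sentence, shots=FEW_SHOT):
    lines = ["Correct the grammatical errors in the sentence.", ""]
    for src, tgt in shots:
        lines += [f"Input: {src}", f"Output: {tgt}", ""]
    # The model is expected to complete the final "Output:" line.
    lines += [f"Input: {sentence}", "Output:"]
    return "\n".join(lines)

print(build_prompt("He have three cat."))
```

    A zero-shot variant is the same prompt with `shots=[]`; comparing such formats is exactly the prompt-sensitivity analysis the abstract describes.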

    Modelling lexical phrases acquisition in L2

    Get PDF
    The study focuses on the following points. It compares the views psycholinguists and computational linguists hold concerning the processes of lexical access and lexical choice. It then shows the similarities between the structure of the lexicon in L1 and in L2. Finally, it tries to offer a pedagogically realistic approach to vocabulary teaching based on these results.

    Use of Activity-Based Probes to Develop High Throughput Screening Assays That Can Be Performed in Complex Cell Extracts

    Get PDF
    Background: High-throughput screening (HTS) is one of the primary tools used to identify novel enzyme inhibitors. However, its applicability is generally restricted to targets that can either be expressed recombinantly or purified in large quantities. Methodology and Principal Findings: Here, we describe a method that uses activity-based probes (ABPs) to identify substrates that are sufficiently selective to allow HTS in complex biological samples. Because ABPs label their target enzymes through the formation of a permanent covalent bond, we can correlate labeling of target enzymes in a complex mixture with inhibition of turnover of a substrate in that same mixture. Thus, substrate specificity can be determined, and substrates with sufficiently high selectivity for HTS can be identified. In this study, we demonstrate this method by using an ABP for dipeptidyl aminopeptidases to identify (Pro-Arg)2-Rhodamine as a specific substrate for DPAP1 in Plasmodium falciparum lysates and Cathepsin C in rat liver extracts. We then used this substrate to develop highly sensitive HTS assays (Z′ > 0.8) that are suitable for use in screening large collections of small molecules (i.e., >300,000) for inhibitors of these proteases. Finally, we demonstrate that it is possible to use broad-spectrum ABPs to identify target-specific substrates. Conclusions: We believe that this approach will have value for many enzymatic systems where access to large amounts o…
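    The Z′-factor quoted in the abstract is the standard assay-quality statistic of Zhang et al. (1999), computed from positive- and negative-control wells. The formula is real; the control readings below are invented for illustration, not data from the study:

```python
import statistics

def z_prime(positive, negative):
    """Z'-factor: Z' = 1 - 3*(sd_pos + sd_neg) / |mean_pos - mean_neg|.

    Z' ranges up to 1; values above ~0.5 indicate an assay with enough
    separation between controls to be suitable for HTS.
    """
    sd_p, sd_n = statistics.stdev(positive), statistics.stdev(negative)
    mu_p, mu_n = statistics.mean(positive), statistics.mean(negative)
    return 1 - 3 * (sd_p + sd_n) / abs(mu_p - mu_n)

# Illustrative fluorescence readings from substrate-turnover controls.
pos = [100.0, 102.0, 98.0, 101.0]  # uninhibited turnover
neg = [5.0, 6.0, 4.0, 5.0]         # fully inhibited / background
print(round(z_prime(pos, neg), 2))  # → 0.92
```

    A Z′ above 0.8, as reported here, therefore indicates a wide, low-noise window between inhibited and uninhibited wells.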